[compat] gpt_bridge compat transformers_5#75

Merged
Jintao-Huang merged 3 commits into modelscope:main from Jintao-Huang:compat_gpt_bridge_transformers_5
Feb 26, 2026

Conversation

@Jintao-Huang
Collaborator

No description provided.

@gemini-code-assist
Contributor

Summary of Changes

Hello @Jintao-Huang, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on enhancing the gpt_bridge's compatibility with version 5 of the Hugging Face transformers library. The primary goal is to ensure accurate and robust handling of model state dictionaries, particularly for various Mixture of Experts (MoE) architectures, during conversions between different model formats. The changes involve refining the logic for identifying grouped experts and gate-up projections, making the conversion process more adaptable to the evolving structures of modern large language models.

Highlights

  • Enhanced Transformers 5.0 Compatibility: Updated the gpt_bridge to improve compatibility with the transformers library version 5.0, specifically addressing how Mixture of Experts (MoE) models are handled during state dictionary conversions.
  • Refactored MoE State Handling: Modified the _set_mlp_state method to dynamically determine hf_grouped and is_gate_up flags based on the conversion direction (to_mcore or to_hf) and the structure of the state dictionary or Hugging Face MLP module.
  • Introduced Model-Specific Grouping Logic: Added a new helper method _get_hf_grouped to provide model-specific overrides for hf_grouped and is_gate_up for a range of MoE models (e.g., Qwen2-MoE, Deepseek-V2, GLM4-MoE), ensuring correct state mapping.
  • Added Utility Imports: Included re and shutil modules, likely for advanced string pattern matching and file operations related to state dictionary processing.
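The model-specific override logic described in the highlights could be centralized as a simple lookup table. The sketch below is purely illustrative; the dictionary name, keys, and flag values are hypothetical and are not taken from `gpt_bridge.py`:

```python
# Hypothetical sketch of per-model overrides for MoE expert layout.
# Model-type keys and flag values are illustrative only.
MOE_LAYOUT_OVERRIDES = {
    # model_type: (hf_grouped, is_gate_up)
    "qwen2_moe": (False, True),    # separate gate/up projections per expert
    "deepseek_v2": (False, True),
    "glm4_moe": (False, True),
}

def get_hf_grouped(model_type: str, default: tuple) -> tuple:
    """Return (hf_grouped, is_gate_up) for a model, falling back to a default."""
    return MOE_LAYOUT_OVERRIDES.get(model_type, default)
```

A table-driven helper like this keeps new model support to a one-line change instead of another branch in the conversion logic.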


Changelog
  • src/twinkle/model/megatron/model/gpt_bridge.py
    • Added re and shutil imports.
    • Introduced _get_hf_grouped method to determine grouping and gate-up status for specific MoE models.
    • Refactored _set_mlp_state to improve transformers 5.0 compatibility, especially for expert layers, by adjusting how hf_grouped and is_gate_up are determined during state conversion.
    • Moved hf_state_dict prefix removal for to_mcore conversion.
    • Updated logic for determining hf_grouped based on state dict keys when converting to_mcore.
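The changelog entry about "determining hf_grouped based on state dict keys" can be sketched as a small heuristic: per-expert layouts contain a numeric expert index in their parameter keys, while grouped layouts do not. The key patterns below are assumptions for illustration, not the exact keys used by transformers 5:

```python
import re

# Assumed key shapes: per-expert keys look like
# "...mlp.experts.0.gate_proj.weight"; grouped keys omit the index.
_EXPERT_IDX_RE = re.compile(r"\.experts\.\d+\.")

def infer_hf_grouped(state_dict_keys) -> bool:
    """Guess whether HF expert weights are stored as one grouped tensor
    (transformers 5 style) or as per-expert submodules (legacy style)."""
    return not any(_EXPERT_IDX_RE.search(k) for k in state_dict_keys)
```

Inferring the layout from the keys themselves keeps the `to_mcore` path working across both old and new checkpoint formats without a version check.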
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces changes to gpt_bridge.py to improve compatibility with transformers version 5. The main changes involve refactoring the logic in _set_mlp_state to handle different model expert structures and introducing a new helper method _get_hf_grouped to centralize model-specific compatibility flags. The changes look good and improve the code's structure. I have a couple of suggestions to further improve maintainability and performance by extracting hardcoded values and pre-compiling a regular expression.
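The reviewer's suggestion to pre-compile a regular expression can be illustrated as follows. The pattern and function names here are hypothetical, not the ones in `gpt_bridge.py`; note that Python's `re` module also caches compiled patterns internally, so the main gains are skipping the cache lookup on hot paths and making the pattern a named, reusable constant:

```python
import re

# Compiled once at import time (the reviewer's suggestion).
EXPERT_RE = re.compile(r"\.experts\.(\d+)\.")

def expert_index_compiled(key: str):
    """Extract the expert index from a parameter key, or None."""
    m = EXPERT_RE.search(key)
    return int(m.group(1)) if m else None

def expert_index_inline(key: str):
    """Same result, but the pattern is looked up in re's cache on every call."""
    m = re.search(r"\.experts\.(\d+)\.", key)
    return int(m.group(1)) if m else None
```

Both variants return the same result; the module-level constant simply centralizes the pattern alongside any other hardcoded values the review suggests extracting.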

@Jintao-Huang Jintao-Huang merged commit 52c7cd9 into modelscope:main Feb 26, 2026
1 of 3 checks passed


2 participants